Add support for batched tasks. #668

Merged: bosilca merged 4 commits into ICLDisco:master from bosilca:topic/batched_tasks on May 11, 2026.
Conversation

@bosilca (Contributor) commented Sep 10, 2024

The idea is the following:

  • Task incarnations (aka BODY) can be marked with the "batch" property, allowing the runtime to provide the task with the entire list of ready tasks of the execution stream instead of just extracting the head.
  • The task kernel can then acquire additional tasks from the stream's ready_list and build a ring. The batch group is submitted for execution (the user's responsibility), while the tasks that remain in the stream's ready_list are added back into the stream's pending list in the order in which they were originally provided.
  • Upon completion the user returns the ring of tasks, and the runtime merges it properly for the next stage.

Comment thread parsec/mca/device/device_gpu.c Outdated
Comment thread parsec/interfaces/ptg/ptg-compiler/jdf2c.c Outdated
@bosilca bosilca force-pushed the topic/batched_tasks branch from 88bbbd3 to 41fa201 Compare May 9, 2026 06:32
@bosilca bosilca requested a review from abouteiller May 9, 2026 06:42
@bosilca bosilca force-pushed the topic/batched_tasks branch 2 times, most recently from e56075b to 3783e42 Compare May 11, 2026 13:49
Comment thread parsec/mca/device/device.c Outdated
Comment thread parsec/mca/device/device.h Outdated
Comment thread parsec/mca/device/device_gpu.c Outdated
Comment thread tests/dsl/dtd/dtd_test_simple_gemm.c Outdated
Comment thread parsec/mca/device/device_gpu.c
@devreal (Contributor) commented May 11, 2026

I might have reviewed a stale version (before the last commit). I like the approach, and I think we can integrate it into TTG. The docs that were just added should be enough to work this out.

@bosilca bosilca force-pushed the topic/batched_tasks branch 4 times, most recently from 7dd4d27 to b0a641e Compare May 11, 2026 16:41
Comment thread parsec/mca/device/device_gpu.c
Comment on lines +237 to +284
```c
current_gpu_task = gpu_task;
do {
    double *A, *B, *C;
    int m, n, k, mb, nb, kb;
    parsec_task_t *this_task = current_gpu_task->ec;
    struct timeval start, end, diff;
    double delta;
    double *a_gpu, *b_gpu, *c_gpu;

    parsec_dtd_unpack_args(this_task,
                           &A, &B, &C,
                           &m, &n, &k,
                           &mb, &nb, &kb);
    (void)A; (void)B; (void)C;

    a_gpu = parsec_dtd_get_dev_ptr(this_task, 0);
    b_gpu = parsec_dtd_get_dev_ptr(this_task, 1);
    c_gpu = parsec_dtd_get_dev_ptr(this_task, 2);

    PARSEC_CUDA_CHECK_ERROR("cublasDgemm_v2", status,
                            { return PARSEC_HOOK_RETURN_ERROR; });
    gettimeofday(&start, NULL);

    status = cublasDgemm_v2(handle,
                            CUBLAS_OP_N, CUBLAS_OP_N,
                            mb, nb, kb,
                            one_device, a_gpu, mb,
                                        b_gpu, kb,
                            one_device, c_gpu, mb);
    gettimeofday(&end, NULL);
    timersub(&end, &start, &diff);
    delta = (double)diff.tv_sec + (double)diff.tv_usec/1e6;
    if(verbose) {
        fprintf(stderr, "GEMM(%d, %d, %d) with tiles of %dx%d, %dx%d, %dx%d on node %d, GPU %s submitted in %g s%s\n",
                m, n, k, mb, kb, kb, nb, mb, kb,
                this_task->taskpool->context->my_rank,
                gpu_stream->name, delta,
                batch_count > 1 ? " as part of a batch" : "");
    }

    PARSEC_CUDA_CHECK_ERROR("cublasDgemm_v2", status,
                            { return PARSEC_HOOK_RETURN_ERROR; });

    current_gpu_task = (parsec_gpu_task_t *)current_gpu_task->list_item.list_next;
} while( current_gpu_task != gpu_task );

if( verbose && batch_count > 1 ) {
    fprintf(stderr, "Submitted %d batched GEMM tasks on GPU stream %s\n",
            batch_count, gpu_stream->name);
}
```
Contributor

I think having an example that actually batches GEMM calls would be helpful. Here is one (untested, written in the browser)

Suggested change (replacing the per-task submission loop quoted above with a single batched call):
```c
current_gpu_task = gpu_task;
/* NOTE: assuming all tasks have the same shape. Special handling needed otherwise. */
int m, n, k, mb, nb, kb;
struct timeval start, end, diff;
double delta;
double **As = malloc(batch_count*sizeof(*As));
double **Bs = malloc(batch_count*sizeof(*Bs));
double **Cs = malloc(batch_count*sizeof(*Cs));
for (int i = 0; i < batch_count; ++i) {
    double *A, *B, *C;
    parsec_task_t *this_task = current_gpu_task->ec;
    parsec_dtd_unpack_args(this_task,
                           &A, &B, &C,
                           &m, &n, &k,
                           &mb, &nb, &kb);
    (void)A; (void)B; (void)C;
    As[i] = parsec_dtd_get_dev_ptr(this_task, 0);
    Bs[i] = parsec_dtd_get_dev_ptr(this_task, 1);
    Cs[i] = parsec_dtd_get_dev_ptr(this_task, 2);
    current_gpu_task = (parsec_gpu_task_t *)current_gpu_task->list_item.list_next;
}
gettimeofday(&start, NULL);
/* NOTE: cublasDgemmBatched() expects the pointer arrays to reside in device
 * memory; the host-side As/Bs/Cs would have to be copied to the device first. */
status = cublasDgemmBatched(handle,
                            CUBLAS_OP_N, CUBLAS_OP_N,
                            mb, nb, kb,
                            one_device, (const double *const *)As, mb,
                                        (const double *const *)Bs, kb,
                            one_device, Cs, mb,
                            batch_count);
gettimeofday(&end, NULL);
timersub(&end, &start, &diff);
delta = (double)diff.tv_sec + (double)diff.tv_usec/1e6;
if(verbose) {
    fprintf(stderr, "GEMM(%d, %d, %d) with tiles of %dx%d, %dx%d, %dx%d on node %d, GPU %s submitted in %g s%s\n",
            m, n, k, mb, kb, kb, nb, mb, kb,
            gpu_task->ec->taskpool->context->my_rank,
            gpu_stream->name, delta,
            batch_count > 1 ? " as part of a batch" : "");
}
free(As); free(Bs); free(Cs);
PARSEC_CUDA_CHECK_ERROR("cublasDgemmBatched", status,
                        { return PARSEC_HOOK_RETURN_ERROR; });
if( verbose && batch_count > 1 ) {
    fprintf(stderr, "Submitted %d batched GEMM tasks on GPU stream %s\n",
            batch_count, gpu_stream->name);
}
```

@bosilca (Contributor, Author) commented May 11, 2026

I'm sorry, but it is not PaRSEC's role to batch GEMM tasks. There is a clear way to do it and an example of how to do it. That should be more than enough for any PR!

That being said, this would be a really nice addition to DPLASMA! Looking forward to your PR.

Contributor

What is the benefit of "batching" kernels one by one into the same stream vs. submitting them the same way we do today? I don't see how the above will yield any benefit, so it's not a good example.

Contributor Author

In the worst case we save a CUDA event recording and tracking per "batched" task, something you were very supportive of in the PR that saved one event for tasks "with all input arguments available". And here we might save more than one, so even the worst case is still a positive outcome.

Contributor

Not quite. Here we're saving one at most per task (the event on the execution stream) but still have the input and output ones for each task. In #681 we are potentially saving two per task (the input and the output events). Combined, this can significantly reduce the number of events overall.

Contributor Author

There is absolutely no requirement to have the input/output event for each task. This optimization is actually complementary to the one in #681.

Comment on lines +15 to +23
For PTG-generated tasks, use the `batch = true` body property on a device body:

```c
BODY [type=CUDA
batch = true
dyld=cublasDgemm dyldtype=cublas_dgemm_t]
{
/* GPU submit body. */
}
```
Contributor

It's not clear how PTG tasks can actually batch kernel invocations. Simply stringing kernels together on the same stream won't save much.

Contributor Author

The same way as DTD tasks, and the same way we did two years ago for the GB submission.

Contributor

GB submissions are not part of the docs.

Contributor Author

This PR has been lingering here for a very long time. Let's get it in.

@bosilca bosilca force-pushed the topic/batched_tasks branch from b0a641e to 6e5f669 Compare May 11, 2026 18:17
bosilca added 4 commits May 11, 2026 14:58
The idea is the following:
- task incarnations (aka BODY) can be marked with the "batch" property,
  allowing the runtime to provide the task with the entire list of ready
  tasks of the execution stream instead of just extracting the head.
- this list of ready tasks is in fact a ring, which the kernel can trim
  and divide into the batch and the rest. The remaining tasks are left
  in the ring, while the batch group is submitted for execution.
- the kernel also needs to provide a callback into the gpu_task
  complete_stage, such that the runtime can call the specialized
  function able to complete all batched tasks.

Signed-off-by: George Bosilca <gbosilca@nvidia.com>
Replace the CUDA-specific batch build switch with
PARSEC_HAVE_DEV_CAPABILITY_BATCH so batching is a runtime capability
shared by all supported device types. Export the new option through
parsec_options and PaRSECConfig.

Add per-device MCA parameters to disable batching for CPU, recursive,
CUDA, HIP, and Level Zero devices. Use shared helpers to sanitize batch
chore types in DTD and to gate GPU task-ring batching on the selected
device.

Teach PTG to accept batch=true for CPU/default bodies as well as typed
device bodies, and add CPU batch examples for both PTG and DTD with
ctest coverage for the enabled and CPU-disabled DTD paths.

Signed-off-by: George Bosilca <gbosilca@nvidia.com>
Add parsec_gpu_task_collect_batch() so GPU submit hooks can build a
batched task ring by providing a compatibility callback, without directly
manipulating the stream pending FIFO.

Always pass a singleton task to the submit hook, and automatically chain a
returned non-singleton task ring into the next GPU stream pending queue.
This removes the need for user-provided completion callbacks whose only
purpose was to merge the batched ring back into fifo_pending.

Update the CUDA GEMM and stage_custom tests to use the new collector helper
and drop their open-coded FIFO iteration and batched completion callbacks.
Use PaRSEC's singleton terminology for the internal task-ring check.

Signed-off-by: George Bosilca <gbosilca@nvidia.com>
Add Doxygen documentation for device task batching, including how to
enable batching for PTG and DTD device chores.

Describe the recommended parsec_gpu_task_collect_batch() interface, its
callback contract, return values, and the runtime behavior for returned
non-singleton task rings.

Also document the lower-level direct FIFO/ring manipulation approach for
expert users that need to avoid the callback overhead in high-load
scenarios.

Link the new page from the main Doxygen page and include it in the
Doxygen input list.

Signed-off-by: George Bosilca <gbosilca@nvidia.com>
@bosilca bosilca force-pushed the topic/batched_tasks branch from 6e5f669 to a30718b Compare May 11, 2026 18:58
@devreal (Contributor) left a comment:

I guess we'll see benefits in DPLASMA once it's picked up there (not volunteering by any means...)

@bosilca bosilca merged commit 6751bd7 into ICLDisco:master May 11, 2026
12 checks passed